
    New convergence results for the scaled gradient projection method

    The aim of this paper is to deepen the convergence analysis of the scaled gradient projection (SGP) method, proposed by Bonettini et al. in a recent paper for constrained smooth optimization. The main feature of SGP is the presence of a variable scaling matrix multiplying the gradient, which may change at each iteration. In the last few years, extensive numerical experimentation has shown that SGP, equipped with a suitable choice of the scaling matrix, is a very effective tool for solving large-scale variational problems arising in image and signal processing. In spite of the very reliable numerical results observed, only a weak, though very general, convergence theorem was available, establishing that any limit point of the sequence generated by SGP is stationary. Here, under the sole assumptions that the objective function is convex and that a solution exists, we prove that the sequence generated by SGP converges to a minimum point, provided that the sequence of scaling matrices satisfies a simple and implementable condition. Moreover, assuming that the gradient of the objective function is Lipschitz continuous, we also prove an O(1/k) convergence rate with respect to the objective function values. Finally, we present the results of numerical experiments on some relevant image restoration problems, showing that the proposed scaling matrix selection rule also performs well from the computational point of view.
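
    To make the iteration concrete, here is a minimal Python sketch of one SGP-style step for a box-constrained problem with a diagonal scaling matrix (in that case the scaled projection reduces to componentwise clipping); the function names, the fixed steplength and the Armijo parameters below are illustrative choices, not the selection rules analyzed in the paper.

    ```python
    import numpy as np

    def sgp_step(x, f, grad_f, lower, upper, D, alpha,
                 beta=1e-4, delta=0.5, max_backtracks=30):
        """One scaled gradient projection step on the box [lower, upper].

        D is a vector representing a diagonal scaling matrix, alpha a steplength.
        For a box constraint and a diagonal metric, the scaled projection
        reduces to componentwise clipping."""
        g = grad_f(x)
        y = np.clip(x - alpha * D * g, lower, upper)   # scaled gradient projection
        d = y - x                                      # feasible descent direction
        lam, fx, slope = 1.0, f(x), g @ d
        for _ in range(max_backtracks):                # Armijo backtracking on lam
            if f(x + lam * d) <= fx + beta * lam * slope:
                break
            lam *= delta
        return x + lam * d

    # toy usage: minimize 0.5*||A x - b||^2 over the nonnegative orthant
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
    f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
    grad_f = lambda x: A.T @ (A @ x - b)
    x = np.ones(10)
    for _ in range(200):
        x = sgp_step(x, f, grad_f, 0.0, np.inf, D=np.ones(10), alpha=1e-2)
    ```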

    A discrepancy principle for Poisson data: uniqueness of the solution for 2D and 3D data

    This paper is concerned with the uniqueness of the solution of a nonlinear equation, named the discrepancy equation. In the restoration of data corrupted by Poisson noise, one has to minimize an objective function that combines a data-fidelity function, given by the generalized Kullback–Leibler divergence, and a regularization penalty function. Bertero et al. recently proposed to use the solution of the discrepancy equation as a convenient value for the regularization parameter. Furthermore, they devised suitable conditions to ensure the uniqueness of this solution for several regularization functions in 1D denoising and deblurring problems. The aim of this paper is to generalize this uniqueness result to 2D and 3D problems for several penalty functions, such as an edge-preserving functional, a simple case of the class of Markov Random Field (MRF) regularization functionals, and the classical Tikhonov regularization.
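
    For orientation, the quantities at play can be written as follows; this is a LaTeX sketch with notation chosen here (A the imaging operator, b a known background, y the Poisson data, M the number of data points, R the penalty), and the scaling of the discrepancy function is the one usually associated with this principle, which may differ from the paper's.

    ```latex
    % Generalized Kullback--Leibler divergence between the data y and the model Ax+b
    D_{\mathrm{KL}}(y;\,Ax+b) \;=\; \sum_{i=1}^{M}\left( y_i \log\frac{y_i}{(Ax+b)_i} + (Ax+b)_i - y_i \right)

    % Regularized solution for a given parameter \beta > 0
    x_\beta \;\in\; \operatorname*{arg\,min}_{x \ge 0}\; D_{\mathrm{KL}}(y;\,Ax+b) \;+\; \beta\, R(x)

    % Discrepancy equation: choose \beta so that the scaled residual matches its
    % expected value under Poisson noise
    \frac{2}{M}\, D_{\mathrm{KL}}\big(y;\,A x_\beta + b\big) \;=\; 1
    ```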

    A variable metric forward-backward method with extrapolation

    Forward-backward methods are a very useful tool for the minimization of a functional given by the sum of a differentiable term and a nondifferentiable one, and their investigation has attracted considerable effort from many researchers in the last decade. In this paper we focus on the convex case and, inspired by recent approaches for accelerating first-order iterative schemes, we develop a scaled inertial forward-backward algorithm, based on a metric that changes at each iteration and on a suitable extrapolation step. Unlike standard forward-backward methods with extrapolation, our scheme is able to handle functions whose domain is not the entire space. Both an O(1/k^2) convergence rate estimate on the objective function values and the convergence of the sequence of iterates are proved. Numerical experiments on several test problems arising from image processing, compressed sensing and statistical inference show the effectiveness of the proposed method in comparison to well-performing state-of-the-art algorithms.
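
    A minimal Python sketch of a scaled inertial forward-backward iteration, under simplifying assumptions: a diagonal metric, an l1 nondifferentiable term (whose scaled proximal operator is a componentwise soft threshold) and the classical FISTA extrapolation rule rather than the specific rule analyzed in the paper; all names and parameters are illustrative.

    ```python
    import numpy as np

    def soft_threshold(z, thresh):
        """Proximal operator of thresh*||.||_1, applied componentwise."""
        return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

    def scaled_inertial_fb(grad_f, lam, x0, D, alpha, n_iter=300):
        """Inertial forward-backward sketch with a diagonal metric D (vector of
        positive entries): extrapolation, scaled forward (gradient) step, scaled
        backward (proximal) step for lam*||.||_1."""
        x_prev = x = x0.copy()
        t = 1.0
        for _ in range(n_iter):
            t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x + ((t - 1.0) / t_next) * (x - x_prev)        # extrapolation
            z = y - alpha * grad_f(y) / D                      # scaled forward step
            x_prev, x = x, soft_threshold(z, alpha * lam / D)  # scaled backward step
            t = t_next
        return x

    # toy usage: l1-regularized least squares
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((30, 60)), rng.standard_normal(30)
    grad_f = lambda x: A.T @ (A @ x - b)
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of grad_f
    x = scaled_inertial_fb(grad_f, lam=0.1, x0=np.zeros(60),
                           D=np.ones(60), alpha=1.0 / L)
    ```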

    Variable metric inexact line-search based methods for nonsmooth optimization

    We develop a new proximal-gradient method for minimizing the sum of a differentiable, possibly nonconvex, function plus a convex, possibly nondifferentiable, function. The key features of the proposed method are the definition of a suitable descent direction, based on the proximal operator associated with the convex part of the objective function, and an Armijo-like rule to determine the step size along this direction, ensuring a sufficient decrease of the objective function. In this framework, we especially address the possibility of adopting a metric which may change at each iteration and an inexact computation of the proximal point defining the descent direction. For the more general nonconvex case, we prove that all limit points of the sequence of iterates are stationary, while for convex objective functions we prove convergence of the whole sequence to a minimizer, under the assumption that a minimizer exists. In the latter case, assuming also that the gradient of the smooth part of the objective function is Lipschitz continuous, we give a convergence rate estimate, showing O(1/k) complexity with respect to the function values. We also discuss verifiable sufficient conditions for the inexact proximal point, and we present the results of numerical experiments on a convex, total-variation-based image restoration problem, showing that the proposed approach is competitive with another state-of-the-art method.
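
    A Python sketch of one such step in the simplest setting (Euclidean metric and exact proximal computation, rather than the variable metric and inexact proximal point the method allows); the predicted-decrease quantity in the Armijo-like condition and all parameter values are illustrative.

    ```python
    import numpy as np

    def prox_l1(z, thresh):
        """Proximal operator of thresh*||.||_1."""
        return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

    def ls_prox_grad_step(x, f, grad_f, g, prox_g, alpha,
                          beta=1e-4, delta=0.5, max_backtracks=30):
        """One line-search based proximal-gradient step: the descent direction is
        y - x, with y the proximal point, and an Armijo-like backtracking rule is
        applied to the full objective f + g along that direction."""
        grad = grad_f(x)
        y = prox_g(x - alpha * grad, alpha)      # (here: exact) proximal point
        d = y - x                                # descent direction
        dec = grad @ d + g(y) - g(x)             # predicted decrease (<= 0)
        F = lambda z: f(z) + g(z)
        lam, Fx = 1.0, F(x)
        for _ in range(max_backtracks):
            if F(x + lam * d) <= Fx + beta * lam * dec:
                break
            lam *= delta
        return x + lam * d

    # toy usage: l1-regularized least squares
    rng = np.random.default_rng(1)
    A, b = rng.standard_normal((25, 40)), rng.standard_normal(25)
    f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
    grad_f = lambda x: A.T @ (A @ x - b)
    mu = 0.1                                     # regularization weight
    g = lambda x: mu * np.abs(x).sum()
    prox_g = lambda z, a: prox_l1(z, a * mu)
    x = np.zeros(40)
    for _ in range(200):
        x = ls_prox_grad_step(x, f, grad_f, g, prox_g, alpha=1.0)
    ```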

    On the convergence of a linesearch based proximal-gradient method for nonconvex optimization

    We consider a variable metric, linesearch based proximal gradient method for the minimization of the sum of a smooth, possibly nonconvex function and a convex, possibly nonsmooth term. We prove convergence of this iterative algorithm to a critical point if the objective function satisfies the Kurdyka-Lojasiewicz property at each point of its domain, under the assumption that a limit point exists. The proposed method is applied to a wide collection of image processing problems, and our numerical tests show that the algorithm is flexible, robust and competitive when compared to recently proposed approaches able to address the optimization problems arising in the considered applications.
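
    For reference, the Kurdyka-Lojasiewicz property at a point can be stated as follows; this is a LaTeX sketch in the form commonly used in this kind of convergence analysis, with notation chosen here.

    ```latex
    % F satisfies the Kurdyka-Lojasiewicz property at \bar{x} if there exist
    % \eta > 0, a neighbourhood U of \bar{x} and a continuous concave function
    % \varphi : [0,\eta) \to [0,\infty) with \varphi(0) = 0, \varphi continuously
    % differentiable on (0,\eta) and \varphi' > 0, such that
    \varphi'\big(F(x) - F(\bar{x})\big)\, \operatorname{dist}\big(0, \partial F(x)\big) \;\ge\; 1
    % for all x \in U with F(\bar{x}) < F(x) < F(\bar{x}) + \eta.
    ```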

    Application of cyclic block generalized gradient projection methods to Poisson blind deconvolution

    The aim of this paper is to consider a modification of a block coordinate gradient projection method with Armijo linesearch along the descent direction, in which the projection onto the feasible set is performed according to a variable non-Euclidean metric. The stationarity of the limit points of the resulting scheme has recently been proved under some general assumptions on the generalized gradient projections employed. Here we test some methods belonging to this class on a blind deconvolution problem with data affected by Poisson noise, and we illustrate the impact of the choice of the projection operator on the practical performance of the corresponding algorithm.
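
    The cyclic block structure can be sketched in Python as follows, under simplifying assumptions: a one-dimensional circular-convolution model, fixed steplengths, a Euclidean nonnegativity projection for the image and a clip-and-rescale surrogate for the PSF constraints, instead of the Armijo linesearch and the variable non-Euclidean projections studied in the paper; all names are illustrative.

    ```python
    import numpy as np

    def circ_conv(a, b):
        """Circular convolution via FFT (periodic boundary model)."""
        return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

    def kl_grad(u, v, y, bg=1e-10):
        """Gradient with respect to u of the generalized Kullback-Leibler
        divergence between the data y and the model circ_conv(u, v) + bg."""
        residual = 1.0 - y / (circ_conv(u, v) + bg)
        # adjoint of circular convolution with v = circular correlation with v
        return np.fft.ifft(np.conj(np.fft.fft(v)) * np.fft.fft(residual)).real

    def cyclic_blind_step(x, h, y, step_x=1e-3, step_h=1e-3):
        """One cycle of a block gradient projection scheme for Poisson blind
        deconvolution: a projected gradient step on the image block
        (nonnegativity), then one on the PSF block (nonnegativity and unit
        total flux, enforced here by clipping and rescaling)."""
        x = np.maximum(x - step_x * kl_grad(x, h, y), 0.0)
        h = np.maximum(h - step_h * kl_grad(h, x, y), 0.0)
        h = h / max(h.sum(), 1e-12)
        return x, h
    ```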

    Hestenes method for symmetric indefinite systems in interior-point method

    This paper deals with the analysis and the solution of the Karush-Kuhn-Tucker (KKT) system that arises at each iteration of an interior-point (IP) method for minimizing a nonlinear function subject to equality and inequality constraints. This system is generally large and sparse, and it can be reduced so that the coefficient matrix is still sparse, symmetric and indefinite, with size equal to the number of primal variables plus the number of equality constraints. Instead of transforming this reduced system into a quasidefinite form by the regularization techniques used in available IP codes, under standard assumptions on the nonlinear problem the system can be viewed as the Lagrange optimality conditions of a linear equality-constrained quadratic programming problem, so that Hestenes' method of multipliers can be applied. Numerical experiments on elliptic control problems with boundary and distributed control show the effectiveness of the Hestenes scheme as an inner solver for IP methods.
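
    As a reminder of the underlying scheme, here is a Python sketch of Hestenes' method of multipliers applied to a generic equality-constrained quadratic program; this is a standalone toy example with illustrative data and penalty parameter, not the reduced KKT system produced by the IP iteration.

    ```python
    import numpy as np

    def hestenes_qp(Q, c, A, b, rho=10.0, n_iter=50):
        """Hestenes' method of multipliers (augmented Lagrangian) for
            min 0.5 x^T Q x + c^T x   subject to   A x = b.
        Each inner step minimizes the augmented Lagrangian exactly by solving a
        linear system with the augmented Hessian; the multipliers are then
        updated with the constraint residual."""
        n, m = Q.shape[0], A.shape[0]
        x, lam = np.zeros(n), np.zeros(m)
        H = Q + rho * A.T @ A                     # augmented Hessian
        for _ in range(n_iter):
            rhs = -(c + A.T @ (lam - rho * b))
            x = np.linalg.solve(H, rhs)           # inner minimization
            lam = lam + rho * (A @ x - b)         # multiplier update
        return x, lam

    # toy usage: a 2-variable QP with one equality constraint
    Q = np.array([[3.0, 1.0], [1.0, 2.0]])
    c = np.array([-1.0, -1.0])
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])
    x, lam = hestenes_qp(Q, c, A, b)
    ```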

    A new semi-blind deconvolution approach for Fourier-based image restoration: an application in astronomy

    The aim of this paper is to develop a new optimization algorithm for the restoration of an image starting from samples of its Fourier transform, when only partial information about the data frequencies is provided. The corresponding constrained optimization problem is approached with a cyclic block alternating scheme, in which projected gradient methods are used to find a regularized solution. Our algorithm is then applied to the imaging of high-energy radiation emitted during a solar flare, through the analysis of the photon counts collected by the NASA RHESSI satellite. Numerical experiments on simulated data show that, both in the presence and in the absence of statistical noise, the proposed approach provides some improvements in the reconstructions.
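
    A minimal Python sketch of the projected gradient inner loop for the image block, assuming an orthonormal DFT model, a binary sampling mask and a nonnegativity constraint; in a cyclic block alternating scheme of the kind described above, the other block (the parameters describing the uncertain frequencies) would be updated by an analogous loop. All names and the fixed steplength are illustrative.

    ```python
    import numpy as np

    def fourier_fit_grad(x, mask, v):
        """Gradient with respect to the (real) image x of
        0.5 * || mask * F(x) - v ||^2, with F the orthonormal DFT and v the
        sampled Fourier data (zero outside the mask)."""
        r = mask * np.fft.fft(x, norm="ortho") - v
        return np.fft.ifft(mask * r, norm="ortho").real

    def projected_gradient_block(x, mask, v, step=0.5, n_inner=20):
        """Inner loop for the image block: projected gradient steps with a
        nonnegativity constraint."""
        for _ in range(n_inner):
            x = np.maximum(x - step * fourier_fit_grad(x, mask, v), 0.0)
        return x
    ```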